AI Principles
A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption
Moreno-Sánchez, Pedro A., Del Ser, Javier, van Gils, Mark, Hernesniemi, Jussi
Artificial Intelligence (AI) holds great promise for transforming healthcare, particularly in disease diagnosis, prognosis, and patient care. The increasing availability of digital medical data, such as images, omics, biosignals, and electronic health records, combined with advances in computing, has enabled AI models to approach expert-level performance. However, widespread clinical adoption remains limited, primarily due to challenges beyond technical performance, including ethical concerns, regulatory barriers, and lack of trust. To address these issues, AI systems must align with the principles of Trustworthy AI (TAI), which emphasize human agency and oversight, algorithmic robustness, privacy and data governance, transparency, bias and discrimination avoidance, and accountability. Yet, the complexity of healthcare processes (e.g., screening, diagnosis, prognosis, and treatment) and the diversity of stakeholders (clinicians, patients, providers, regulators) complicate the integration of TAI principles. To bridge the gap between TAI theory and practical implementation, this paper proposes a design framework to support developers in embedding TAI principles into medical AI systems. Thus, for each stakeholder identified across various healthcare processes, we propose a disease-agnostic collection of requirements that medical AI systems should incorporate to adhere to the principles of TAI. Additionally, we examine the challenges and tradeoffs that may arise when applying these principles in practice. To ground the discussion, we focus on cardiovascular diseases, a field marked by both high prevalence and active AI innovation, and demonstrate how TAI principles have been applied and where key obstacles persist.
Humble AI in the real-world: the case of algorithmic hiring
Nair, Rahul, Vejsbjerg, Inge, Daly, Elizabeth, Varytimidis, Christos, Knowles, Bran
Humble AI (Knowles et al., 2023) argues for cautiousness in AI development and deployments through scepticism (accounting for limitations of statistical learning), curiosity (accounting for unexpected outcomes), and commitment (accounting for multifaceted values beyond performance). We present a real-world case study for humble AI in the domain of algorithmic hiring. Specifically, we evaluate virtual screening algorithms in a widely used hiring platform that matches candidates to job openings. There are several challenges in misrecognition and stereotyping in such contexts that are difficult to assess through standard fairness and trust frameworks; e.g., someone with a non-traditional background is less likely to rank highly. We demonstrate technical feasibility of how humble AI principles can be translated to practice through uncertainty quantification of ranks, entropy estimates, and a user experience that highlights algorithmic unknowns. We describe preliminary discussions with focus groups made up of recruiters. Future user studies seek to evaluate whether the higher cognitive load of a humble AI system fosters a climate of trust in its outcomes.
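The abstract's "uncertainty quantification of ranks" and "entropy estimates" can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes candidate scores are resampled (e.g. via bootstrap or a model ensemble), and the helper name `rank_entropy` is hypothetical. The idea is that a candidate whose rank flips between score draws has a high-entropy rank distribution, which is exactly the kind of "algorithmic unknown" a humble UI would surface.

```python
import numpy as np

def rank_entropy(score_samples):
    """Estimate per-candidate rank uncertainty from sampled score vectors.

    score_samples: (n_samples, n_candidates) array, each row one draw of
    candidate scores (e.g. from a bootstrap resample or ensemble member).
    Returns the Shannon entropy (in bits) of each candidate's rank
    distribution: 0 means the rank is certain; higher values flag
    candidates whose ranking the system should be humble about.
    """
    n_samples, n_candidates = score_samples.shape
    # Rank candidates within each sampled score vector (0 = top rank).
    ranks = np.argsort(np.argsort(-score_samples, axis=1), axis=1)
    entropies = np.empty(n_candidates)
    for c in range(n_candidates):
        counts = np.bincount(ranks[:, c], minlength=n_candidates)
        p = counts / n_samples
        p = p[p > 0]  # drop zero-probability ranks before taking logs
        entropies[c] = -np.sum(p * np.log2(p))
    return entropies

# Well-separated scores give near-zero entropy; overlapping scores give
# entropy approaching log2(n_candidates).
rng = np.random.default_rng(0)
stable = rng.normal(loc=[3.0, 2.0, 1.0], scale=0.01, size=(500, 3))
noisy = rng.normal(loc=[2.0, 2.0, 2.0], scale=1.0, size=(500, 3))
print(rank_entropy(stable))
print(rank_entropy(noisy))
```

In a screening UI, the per-candidate entropy could accompany each rank so recruiters see when two candidates are effectively tied rather than reliably ordered.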
Google defends scrapping AI pledges and DEI goals in all-staff meeting
Google's executives gave details on Wednesday on how the tech giant will sunset its diversity initiatives and defended dropping its pledge against building artificial intelligence for weaponry and surveillance in an all-staff meeting. Melonie Parker, Google's former head of diversity, said the company was doing away with its diversity and inclusion employee training programs and "updating" broader training programs that have "DEI content". It was the first time company executives have addressed the whole staff since Google announced it would no longer follow hiring goals for diversity and took down its pledge not to build militarized AI. The chief legal officer, Kent Walker, said a lot had changed since Google first introduced its AI principles in 2018, which explicitly stated Google would not build AI for harmful purposes. He said it would be "good for society" for the company to be part of evolving geopolitical discussions in response to a question about why the company removed prohibitions against building AI for weapons and surveillance.
Outrage as Google scraps its promise not to use AI for weapons or surveillance
Google has updated its AI ethical guidelines and removed a key pledge not to use the tech in a dangerous way. The company erased the 2018 pledge on Tuesday which stated the tech giant 'would not use AI for weapons or surveillance'. The revised policy now shows that Google will only develop AI 'responsibly' and in line with 'widely accepted principles of international law and human rights.' Google's change has sparked internal backlash as employees called the move 'deeply concerning' and said the company should not be involved in 'the business of war.' Matt Mahmoudi, Amnesty adviser on AI and human rights, shamed Google for the move, saying the tech giant set a 'dangerous precedent.' 'AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy,' he added.
Google drops pledge not to use AI for weapons, surveillance
Google has dropped a pledge not to use artificial intelligence for weapons or surveillance in its updated ethics policy on the powerful technology. In its previous version of "AI Principles", the California-based internet giant included a commitment not to pursue AI technologies that "cause or are likely to cause overall harm", including weapons and surveillance that violates "internationally accepted norms". Google's revised policy announced on Tuesday states that the company pursues AI "responsibly" and in line with "widely accepted principles of international law and human rights", but does not include the previous language about weapons or surveillance. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," Google DeepMind chief Demis Hassabis and research labs senior vice president James Manyika said in a blog post announcing the updated policy. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
Google pledge against using AI for weapons vanishes
Google on Tuesday updated its principles when it comes to artificial intelligence, removing vows not to use the technology for weapons or surveillance. Revised AI principles were posted just weeks after Google chief executive Sundar Pichai and other tech titans attended the inauguration of U.S. President Donald Trump. When asked about the change, a Google spokesperson referred to a blog post outlining the company's AI principles that made no mention of the promises, which Pichai first outlined in 2018.
Google drops pledge on AI use for weapons
There is debate amongst AI experts and professionals over how the powerful new technology should be governed in broad terms, how far commercial gains should be allowed to determine its direction, and how best to guard against risks for humanity in general. There is also controversy around the use of AI on the battlefield and in surveillance technologies. The blog said the company's original AI principles published in 2018 needed to be updated as the technology had evolved. "Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications."
Google now thinks it's OK to use AI for weapons and surveillance
Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document. Instead, there's now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." That's a far broader commitment than the specific ones the company made as recently as the end of last month when the prior version of its AI principles was still live on its website.
DeepMind workers urge Google to drop military contracts
Google DeepMind workers have signed a letter calling on the company to drop contracts with military organizations, according to a report by Time. The document was drafted on May 16 of this year. Around 200 people signed the document, which amounts to five percent of the total headcount of DeepMind. For the uninitiated, DeepMind is one of Google's AI divisions and the letter states that adopting military contracts runs afoul of the company's own AI rules. The letter was sent out as internal concerns began circulating within the AI lab that the tech was allegedly being sold to military organizations via cloud contracts.
Exclusive: Workers at Google DeepMind Push Company to Drop Military Contracts
Nearly 200 workers inside Google DeepMind, the company's AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google's own AI rules. The letter is a sign of a growing dispute within Google between at least some workers in its AI division--which has pledged to never work on military technology--and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States. The signatures represent some 5% of DeepMind's overall headcount--a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand. The DeepMind letter, dated May 16 of this year, begins by stating that workers are "concerned by recent reports of Google's contracts with military organizations."